Results 1 - 20 of 30
1.
Heliyon ; 10(5): e26416, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468957

ABSTRACT

The emergence of the federated learning (FL) technique in fog-enabled healthcare systems has strengthened privacy by safeguarding sensitive patient information across heterogeneous computing platforms. In this paper, we introduce the FedHealthFog framework, developed to overcome the difficulties of distributed learning in resource-constrained IoT-enabled healthcare systems, particularly those sensitive to delay and energy efficiency. Conventional federated learning approaches suffer from substantial compute requirements and significant communication costs, primarily because they rely on a single server to aggregate the global model, which results in inefficient training. We address these problems by elevating strategically placed fog nodes to the role of local aggregators within the federated learning architecture. A greedy heuristic optimizes the choice of a fog node as the global aggregator in each communication cycle between edge devices and the cloud. FedHealthFog reduces communication latency by 87.01%, 26.90%, and 71.74%, and energy consumption by 57.98%, 34.36%, and 35.37%, respectively, relative to the three benchmark algorithms analyzed in this study. Our experiments show that FedHealthFog outperforms state-of-the-art alternatives while reducing the number of global aggregation cycles. These findings highlight FedHealthFog's potential to transform federated learning in resource-constrained IoT environments for delay-sensitive applications.
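The abstract does not spell out the greedy heuristic or the aggregation rule, but the general idea — each round, pick the fog node that minimizes an estimated latency-plus-energy cost, then average the client models — can be sketched as follows. The cost model, field names, and weighting are illustrative assumptions, not the paper's method.

```python
import numpy as np

def select_aggregator(fog_nodes, clients):
    """Greedy choice: the fog node with the lowest estimated round cost.

    Cost model (an assumption for illustration): sum of per-client
    communication latency plus the node's energy price per round.
    """
    def round_cost(node):
        latency = sum(node["latency_to"][c["id"]] for c in clients)
        return latency + node["energy_per_round"]
    return min(fog_nodes, key=round_cost)

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: weight each client's model by its sample count."""
    total = sum(client_sizes)
    return [
        sum(w[k] * n / total for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# One illustrative communication round
clients = [{"id": 0, "n": 120}, {"id": 1, "n": 80}]
fog_nodes = [
    {"name": "fog-A", "latency_to": {0: 5.0, 1: 9.0}, "energy_per_round": 2.0},
    {"name": "fog-B", "latency_to": {0: 7.0, 1: 4.0}, "energy_per_round": 1.5},
]
weights = [[np.ones(3) * 0.2], [np.ones(3) * 0.8]]   # toy one-layer models
agg = select_aggregator(fog_nodes, clients)
global_model = fedavg(weights, [c["n"] for c in clients])
print(agg["name"], global_model[0])   # fog-B, weighted average ~= 0.44
```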

2.
Sci Rep ; 14(1): 6589, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38504098

ABSTRACT

Identifying and recognizing food on the basis of its eating sounds is a challenging task, and it plays an important role in avoiding allergenic foods, supporting people restricted to a particular diet, showcasing cultural significance, and more. The aim of this research paper is to design a novel methodology that identifies food items by analyzing their eating sounds with various deep learning models. To achieve this objective, a system is proposed that extracts meaningful features from food-eating sounds using signal processing techniques and deep learning models that classify them into their respective food classes. Initially, 1200 labeled audio files covering 20 food items were collected and visualized to find relationships between the sound files of different food items. To extract meaningful features, techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients were used both to clean the audio files and to capture the unique characteristics of different food items. In the next phase, deep learning models such as GRU, LSTM, InceptionResNetV2, and a customized CNN were trained to learn spectral and temporal patterns in the audio signals. Hybrid models (Bidirectional LSTM + GRU, RNN + Bidirectional LSTM, and RNN + Bidirectional GRU) were also evaluated on the same labeled data to associate particular sound patterns with their corresponding food classes. During evaluation, the highest accuracy was obtained by GRU (99.28%), the highest precision and F1 score by Bidirectional LSTM + GRU (97.7% and 97.3%), and the highest recall by RNN + Bidirectional LSTM (97.45%). The results demonstrate that deep learning models can precisely identify foods on the basis of their sound.
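As a concrete illustration of the MFCC-plus-recurrent-network portion of such a pipeline, a minimal sketch using librosa for feature extraction and Keras for a GRU classifier might look like the following; the file path, clip length, and hyperparameters are assumptions, not values from the paper.

```python
import numpy as np
import librosa
import tensorflow as tf

def extract_mfcc(path, n_mfcc=13, max_frames=120):
    """Load an audio file and return a fixed-size (frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)
    mfcc = mfcc[:max_frames]
    pad = max_frames - mfcc.shape[0]
    return np.pad(mfcc, ((0, pad), (0, 0)))   # zero-pad short clips

NUM_CLASSES = 20  # food items
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(120, 13)),
    tf.keras.layers.GRU(64),                      # temporal pattern learner
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=30)  # X: (N, 120, 13)
```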


Subjects
Deep Learning, Humans, Recognition (Psychology), Food, Mental Recall, Records
3.
Sci Rep ; 14(1): 5753, 2024 03 08.
Article in English | MEDLINE | ID: mdl-38459096

ABSTRACT

Parasitic organisms pose a major global health threat, mainly in regions that lack advanced medical facilities, so early and accurate detection is vital to saving lives. Deep learning models have advanced the medical sector by providing promising results in diagnosing, detecting, and classifying diseases. This paper explores the role of deep learning techniques in detecting and classifying various parasitic organisms. The work uses a dataset of 34,298 samples of parasites such as Toxoplasma gondii, Trypanosome, Plasmodium, Leishmania, Babesia, and Trichomonad, along with host cells such as red and white blood cells. The images are first converted from RGB to grayscale, followed by the computation of morphological features such as perimeter, height, area, and width. Otsu thresholding and watershed techniques are then applied to separate foreground from background and to create markers on the images for identifying regions of interest. Deep transfer learning models such as VGG19, InceptionV3, ResNet50V2, ResNet152V2, EfficientNetB3, EfficientNetB0, MobileNetV2, Xception, DenseNet169, and the hybrid InceptionResNetV2 are employed, with parameters fine-tuned using three optimizers: SGD, RMSprop, and Adam. Experimental results reveal that with RMSprop, VGG19, InceptionV3, and EfficientNetB0 achieve the highest accuracy of 99.1% with a loss of 0.09. With the SGD optimizer, InceptionV3 performs exceptionally well, achieving the highest accuracy of 99.91% with a loss of 0.98. Finally, with the Adam optimizer, InceptionResNetV2 excels, achieving the highest accuracy of 99.96% with a loss of 0.13, outperforming the other optimizers. These findings indicate that deep learning models coupled with image processing methods provide a highly accurate and efficient way to detect and classify parasitic organisms.
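The grayscale-conversion, Otsu-thresholding, and watershed-marker steps described above follow a standard OpenCV pipeline; a sketch of that preprocessing (with an illustrative file path) might look like this.

```python
import cv2
import numpy as np

img = cv2.imread("parasite_sample.png")            # path is illustrative
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # RGB/BGR -> grayscale

# Otsu thresholding separates foreground cells from background
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Sure background via dilation; sure foreground via distance transform
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Markers for watershed: label sure regions, leave the unknown band as 0
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)              # boundaries marked with -1

# Morphological features of each region of interest
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print({"area": cv2.contourArea(c), "perimeter": cv2.arcLength(c, True),
           "width": w, "height": h})
```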


Subjects
Babesia, Deep Learning, Parasites, Toxoplasma, Animals, Microscopy
4.
Sci Rep ; 13(1): 22204, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38097756

ABSTRACT

The steady two-dimensional (2D) ternary nanofluid (TNF) flow across an inclined permeable cylinder/plate is analyzed in the present study. The TNF flow is examined under the effects of a heat source/sink, a permeable medium, and mixed convection. For the preparation of the TNF, magnesium oxide (MgO), cobalt ferrite (CoFe2O4), and titanium dioxide (TiO2) are dispersed in water. The rising need for highly efficient cooling mechanisms in several sectors and energy-related processes inspired the current work. The fluid flow and energy propagation are described mathematically by coupled PDEs, which are reduced to non-dimensional ODEs and solved numerically with the MATLAB package bvp4c. The results show that the porosity factor raises the thermal profile while lowering the fluid velocity, and the heat source/sink raises the energy field. Furthermore, the plate surface exhibits greater energy transport than the cylinder geometry as the ternary nanoparticle (NP) loading varies. The energy dissemination rate of the cylinder increases from 4.73% to 11.421%, whereas for the plate it rises from 6.37% to 13.91%, as the porosity factor varies from 0.3 to 0.9.
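The paper solves its reduced ODE system with MATLAB's bvp4c; the closest Python analogue is scipy.integrate.solve_bvp. The toy two-point boundary-value problem below (a stand-in equation, not the paper's similarity system) shows the same workflow of defining the first-order system, the boundary conditions, and a mesh.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy two-point BVP standing in for the reduced similarity equations:
#   theta'' + lam * theta' = 0,  theta(0) = 1, theta(L) = 0
lam = 0.5   # stand-in for a porosity-like parameter

def odes(x, y):
    # y[0] = theta, y[1] = theta'
    return np.vstack([y[1], -lam * y[1]])

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0]])

x = np.linspace(0, 5, 50)
y0 = np.zeros((2, x.size))          # initial guess for the solver
sol = solve_bvp(odes, bc, x, y0)
print(sol.status, sol.y[0, :5])     # status 0 means the solver converged
```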

6.
Sci Rep ; 13(1): 20918, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38017082

ABSTRACT

In this article, a low-complexity VLSI architecture based on a radix-4 hyperbolic COordinate Rotation DIgital Computer (CORDIC) is proposed to compute the N-th root and N-th power of a fixed-point number. The most recent techniques use the radix-2 CORDIC algorithm to compute the root and power, and the high computation latency of radix-2 CORDIC is the primary concern for designers. In the proposed architecture, the N-th root and N-th power computations are divided into three phases, each performed by a different class of the proposed modified radix-4 CORDIC algorithms. Although radix-4 CORDIC converges faster with fewer iterations, it demands more hardware resources and computational steps due to its intricate angle-selection logic and variable scale factor. We employ the modified radix-4 hyperbolic vectoring (R4HV) CORDIC to compute logarithms, radix-4 linear vectoring (R4LV) to perform division, and the modified scaling-free radix-4 hyperbolic rotation (R4HR) CORDIC to compute the exponential. The criteria for selecting the amount of rotation in R4HV CORDIC are complicated and depend on the coordinates x and y of the rotating vector. In the proposed modified R4HV CORDIC, we derive simple selection criteria based on the fact that the inputs to the R4HV CORDIC are related; the proposed criteria depend on only one coordinate, which reduces the hardware complexity of the R4HV CORDIC. The R4HR CORDIC has a complex scale factor, and compensating for it requires complex hardware; this complexity is reduced by pre-computing the scale factor for the initial iterations and employing scaling-free rotations for the later iterations. Quantitative hardware analysis suggests better hardware utilization than recent approaches. The proposed architecture is implemented on a Virtex-6 FPGA, and the FPGA implementation demonstrates lower hardware utilization with better error performance than the approach based on the radix-2 CORDIC algorithm.
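The radix-4 digit-selection rules are the paper's contribution and are not reproduced here, but the underlying hyperbolic vectoring idea can be illustrated with the classic radix-2 version, which recovers the logarithm via ln(w) = 2·atanh((w−1)/(w+1)); note the standard repetition of iterations 4 and 13 required for hyperbolic convergence.

```python
import math

def cordic_ln(w, iters=24):
    """Radix-2 hyperbolic vectoring CORDIC sketch: returns ln(w) for w > 0.

    Drives y -> 0 while accumulating atanh(2^-i) terms in z, so that
    z -> atanh(y0 / x0) and ln(w) = 2 * z. Iterations 4 and 13 are
    repeated, as the hyperbolic mode requires for convergence.
    """
    x, y, z = w + 1.0, w - 1.0, 0.0
    schedule, i = [], 1
    while len(schedule) < iters:
        schedule.append(i)
        if i in (4, 13):                    # classic repeat rule
            schedule.append(i)
        i += 1
    for i in schedule:
        d = -1.0 if x * y > 0 else 1.0      # rotate to drive y toward 0
        x, y = x + d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atanh(2.0**-i)
    return 2.0 * z

print(cordic_ln(3.0), math.log(3.0))        # both ~= 1.0986
```

In the architecture described above, this logarithm phase would be followed by a division by N (linear vectoring) and an exponential (hyperbolic rotation) to complete the N-th root dataflow.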

7.
Sci Rep ; 13(1): 18475, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891188

ABSTRACT

Agriculture plays a pivotal role in the economies of developing countries by providing livelihoods, sustenance, and employment opportunities in rural areas. However, crop diseases pose a significant threat to farmers' incomes and food security, and they can also harm human health by causing various illnesses. To date, only a limited number of studies have identified and classified diseased cauliflower plants, and they face challenges such as insufficient disease-surveillance mechanisms, a lack of comprehensive, high-quality, properly labelled datasets, and the considerable computational resources required for thorough analysis. In view of these challenges, the primary objective of this manuscript is to improve cauliflower disease identification and detection in rural agriculture through advanced deep transfer learning techniques. The work covers four classes taken from the VegNet dataset: Bacterial spot rot, Black rot, Downy mildew, and No disease. Ten deep transfer learning models (EfficientNetB0, Xception, EfficientNetB1, MobileNetV2, EfficientNetB2, DenseNet201, EfficientNetB3, InceptionResNetV2, EfficientNetB4, and ResNet152V2) are trained and examined on the basis of root mean square error, recall, precision, F1 score, accuracy, and loss. Remarkably, EfficientNetB1 achieved the highest validation accuracy (99.90%), the lowest loss (0.16), and a root mean square error of 0.40 during experimentation. This research highlights the critical role of advanced CNN models in automating cauliflower disease detection and classification; such models can lead to robust applications for cauliflower disease management in agriculture, ultimately benefiting both farmers and consumers.
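A minimal transfer-learning sketch of the kind of fine-tuning described above, using Keras's stock EfficientNetB1; the input size, head layers, and training settings are assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 4  # Bacterial spot rot, Black rot, Downy mildew, No disease
base = tf.keras.applications.EfficientNetB1(
    include_top=False, weights="imagenet", input_shape=(240, 240, 3))
base.trainable = False                     # freeze ImageNet features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.RootMeanSquaredError()])

# train_ds / val_ds would come from the VegNet images, e.g. via
# tf.keras.utils.image_dataset_from_directory(..., label_mode="categorical")
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```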


Subjects
Deep Learning, Drug-Related Side Effects and Adverse Reactions, Humans, Agriculture, Disease Management, Empirical Research
8.
Sci Rep ; 13(1): 5372, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37005398

ABSTRACT

The Industrial Internet of Things (IIoT) is attracting growing attention for the enormous opportunities it offers in Industry 4.0, but automated, practical data collection and monitoring in industrial applications raise severe data privacy and security challenges. Traditional user authentication strategies in IIoT rely on single-factor authentication, which adapts poorly as the number and variety of users grow. To address this issue, this paper implements a privacy-preservation model for IIoT using advances in artificial intelligence. The designed system has two major stages: sanitization and restoration of IIoT data. Data sanitization hides sensitive IIoT information to prevent leakage. The sanitization procedure performs optimal key generation with a new Grasshopper-Black Hole Optimization (G-BHO) algorithm. A multi-objective function involving the degree of modification, the hiding rate, the correlation coefficient between actual and restored data, and the information preservation rate was derived and used to generate the optimal key. Simulation results establish the dominance of the proposed model over state-of-the-art models across various performance metrics. In terms of privacy preservation, the proposed G-BHO algorithm achieved results 1%, 15.2%, 12.6%, and 1% better than JA, GWO, GOA, and BHO, respectively.
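The G-BHO hybrid itself is not specified in the abstract; the shape of a multi-objective key fitness built from the four criteria named above, with a plain random search standing in for the metaheuristic, might look like the following. The sanitization model, equal weights, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sanitize(data, key):
    """Toy sanitization: zero out entries the key marks as sensitive."""
    return np.where(key > 0.5, 0.0, data)

def fitness(key, data):
    """Weighted blend of the paper's four criteria (weights assumed equal)."""
    masked = sanitize(data, key)
    hiding = np.mean(key > 0.5)                        # hiding rate
    corr = 0.0 if masked.std() == 0 else np.corrcoef(data, masked)[0, 1]
    preservation = 1.0 - hiding                        # information kept
    modification = hiding                              # degree of modification
    return 0.25 * hiding + 0.25 * corr + 0.25 * preservation - 0.25 * modification

# Plain random search standing in for the G-BHO metaheuristic
data = rng.normal(size=64)
best_key = max((rng.random(64) for _ in range(2000)),
               key=lambda k: fitness(k, data))
print(round(float(fitness(best_key, data)), 3))
```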

9.
Sci Rep ; 12(1): 20804, 2022 12 02.
Article in English | MEDLINE | ID: mdl-36460697

ABSTRACT

Carcinoma is a primary source of morbidity in women globally, with metastatic disease accounting for most deaths, so early discovery and diagnosis may significantly increase the odds of survival. Breast cancer imaging is critical for early identification, clinical staging, management choices, and treatment planning. In the current study, FastAI is used with a ResNet-32 model to identify ductal carcinoma precisely. ResNet-32 has fewer layers than most of its counterparts while delivering almost identical performance. FastAI speeds up deep learning workflows through GPU acceleration and an efficient callback mechanism, executing models faster with less code and classifying the tissue slides with better precision. Residual Networks (ResNets) are known to mitigate the vanishing gradient problem and to learn features effectively. Integrating these two computationally efficient technologies yields precise results with reasonable computational effort. The proposed model shows considerable efficiency on evaluation metrics such as sensitivity, specificity, accuracy, and F1 score compared with the other widely used deep learning models. These insights suggest that the proposed approach can assist practitioners in analyzing Breast Cancer (BC) cases appropriately, potentially averting future complications and deaths. Clinical and pathological analysis and predictive accuracy have been improved with digital image processing.
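In fastai's current API the training loop described above takes only a few lines; the sketch below uses resnet34 as a stand-in backbone (torchvision ships no stock ResNet-32, which is a CIFAR-style variant) and an illustrative folder layout.

```python
from fastai.vision.all import *

# Tissue-slide images arranged in class folders (path is illustrative);
# resnet34 stands in for the custom 32-layer ResNet used in the paper.
path = Path("breast_histology")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=[accuracy, F1Score()])
learn.fine_tune(5)                 # fastai's transfer-learning loop
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()     # basis for sensitivity/specificity
```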


Subjects
Breast Neoplasms, Ductal Breast Carcinoma, Second Primary Neoplasms, Female, Humans, Breast Neoplasms/diagnostic imaging, Disease Progression, Acceleration
10.
Sensors (Basel) ; 22(23)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36502150

ABSTRACT

Wearable healthcare equipment is primarily designed to alert patients to specific health conditions or to act as a useful tool for treatment or follow-up. With the growth of technologies and connectivity, the security of these devices has become a growing concern. The lack of security awareness among novice users and the risk of intermediary attacks on health information severely endanger the use of IoT-enabled healthcare systems. In this paper, a blockchain-based secure data storage system is proposed along with user authentication and health status prediction. First, this work utilizes the reversed public-private keys combined Rivest-Shamir-Adleman (RP2-RSA) algorithm for security. Second, feature selection is performed with the correlation factor-induced salp swarm optimization algorithm (CF-SSOA). Finally, health status classification is performed using an advanced weight-initialization-adapted SignReLU activation function-based artificial neural network (ASR-ANN), which classifies the status as normal or abnormal. Abnormal measures are stored in the corresponding patient's blockchain, where blockchain technology keeps medical data secure for further analysis. The proposed model achieved an accuracy of 95.893% and is validated against other baseline techniques. On the security front, the proposed RP2-RSA attains a 96.123% security level.
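The RP2-RSA construction itself is not detailed in the abstract; the toy example below only illustrates the underlying "reversed-role" RSA idea it builds on — operating with the private key so that the public key verifies origin — using deliberately tiny textbook parameters. Never use unpadded, small-prime RSA in production.

```python
# Toy RSA with small primes to show "reversed" key usage (sign/verify).
from math import gcd

p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

msg = 42
sig = pow(msg, d, n)           # "encrypt" with the PRIVATE key (sign)
assert pow(sig, e, n) == msg   # anyone holding the public key can verify
print(n, sig)
```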


Subjects
Blockchain, Humans, Neural Networks (Computer), Algorithms, Technology, Delivery of Health Care, Computer Security
12.
Diagnostics (Basel) ; 12(12)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36553074

ABSTRACT

The development of genomic technology for smart diagnosis and therapies has lately been one of the most demanding areas of computer-aided diagnostic and treatment research. Exponential breakthroughs in artificial intelligence and machine intelligence could pave the way for identifying challenges afflicting the healthcare industry. Genomics is paving the way for predicting future illnesses, including cancer, Alzheimer's disease, and diabetes. Machine learning advancements have expedited the pace of biomedical informatics research and inspired new branches of computational biology. Furthermore, understanding gene relationships has produced more accurate models that can effectively detect patterns in vast volumes of data, making classification models important in various domains. Recurrent neural network models have a memory that lets them retain information from previous steps, making them well suited to processing genetic sequence data. The present work focuses on type 2 diabetes prediction using gene sequences derived from genomic DNA fragments, with automated feature selection and feature extraction procedures matching gene patterns against training data. The suggested model was tested on tabular data to predict type 2 diabetes from several parameters. The performance of neural networks incorporating Recurrent Neural Network (RNN) components, Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU) was tested, and efficiency is assessed using sensitivity, specificity, accuracy, F1 score, and the Matthews Correlation Coefficient (MCC). The suggested technique predicted future illness with fair accuracy. Furthermore, our research showed that the model could be used in real-world scenarios, with input risk variables from an end-user Android application stored and evaluated on a secure remote server.
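A minimal sketch of how DNA fragments can be one-hot encoded and fed to stacked recurrent layers for a binary prediction of this kind; the sequence length, layer sizes, and encoding are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq, length=200):
    """Encode a DNA fragment as a (length, 4) one-hot matrix, zero-padded."""
    arr = np.zeros((length, 4), dtype=np.float32)
    for i, b in enumerate(seq[:length]):
        arr[i, BASES[b]] = 1.0
    return arr

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(200, 4)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.GRU(16),                        # stacked recurrent layers
    tf.keras.layers.Dense(1, activation="sigmoid")  # diabetic vs. non-diabetic
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# X = np.stack([one_hot(s) for s in sequences]); model.fit(X, labels, epochs=20)
```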

13.
Cancers (Basel) ; 14(17)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36077727

ABSTRACT

Cancerous tumor cells divide uncontrollably, resulting in tumor growth or damage to the body's immune system, and the destructive side effects of chemotherapy make optimal medication necessary. Treatment must therefore be controlled to maintain a constant, continuous dose that limits the spread of cancerous tumor cells. Rapid cell growth is classified into primary and secondary types, and the immune system plays an important role in mounting a proper response, a natural process in fighting tumors. Finding better tumor treatments is a prime focus of current research. Mathematical modeling of tumors combines immune therapy, vaccine therapy, and chemotherapy to check performance stability. In this research paper, mathematical modeling captures cancerous tumor growth, the immune system, and normal cells as they are affected by chemotherapy. The paper presents novel techniques, including a Bernstein polynomial (BSP) with a genetic algorithm (GA), a sliding mode controller (SMC), and synergetic control (SC), to give a possible solution to the cancerous tumor cell (CC) model. The GA generates a random population to evaluate fitness. The SMC delivers a continuous exponential dose of chemotherapy that reduces CCs in about forty-five days. The error function covers five cases involving normal cells (NCs), immune cells (ICs), CCs, and chemotherapy, and the drug control process is explained for all cases. In simulation results, SC completely eliminated CCs in nearly five days. The proposed approach reduces CCs as early as possible.
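The paper's multi-state model is not reproduced in the abstract; the sketch below applies a saturated sliding-mode dose law to a one-state logistic tumor model simply to illustrate the control idea, with every parameter invented for the example.

```python
# Toy 1-state stand-in for the paper's model: logistic tumor growth with a
# drug kill term,  dC/dt = r*C*(1 - C/K) - k*u*C.  Parameters are illustrative.
r, K, k, u_max = 0.3, 1.0, 0.8, 1.0
C_ref = 0.01                      # target tumor burden

def smc_dose(C, eps=0.02):
    """Sliding-mode law on s = C - C_ref, smoothed to avoid chattering."""
    s = C - C_ref
    return u_max * min(1.0, max(0.0, s / eps))   # saturated sign(s)

C, dt, t = 0.9, 0.01, 0.0
while t < 45.0:                   # ~45 days, matching the paper's horizon
    u = smc_dose(C)
    C += dt * (r * C * (1 - C / K) - k * u * C)  # forward-Euler step
    t += dt
print(f"tumor burden after 45 days: {C:.4f}")    # settles near C_ref
```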

14.
Sensors (Basel) ; 22(16)2022 Aug 16.
Article in English | MEDLINE | ID: mdl-36015869

ABSTRACT

Wireless sensor networks (WSNs) have recently been viewed as the basic architecture that prepared the way for the Internet of Things (IoT). Nevertheless, when WSNs are linked with the IoT, a difficult issue arises from excessive energy utilization in their nodes and short network lifetimes. As a result, energy constraints in sensor nodes, sensor data sharing, and routing protocols are fundamental topics in WSN research. This paper presents an enhanced smart-energy-efficient routing protocol (ESEERP) that extends the lifetime of the network and improves its connectivity to address the aforementioned deficiencies. It selects the Cluster Head (CH) with an efficient optimization method derived from several objectives, which helps reduce the number of sleepy sensor nodes and decreases energy utilization. After CH selection, a Sailfish Optimizer (SFO) finds an appropriate route to the sink node for data transfer. The proposed methodology is analyzed mathematically in terms of energy utilization, bandwidth, packet delivery ratio, and network longevity, and the results are compared with similar existing approaches such as Genetic Algorithms (GA), Ant Lion Optimization (ALO), and Particle Swarm Optimization (PSO). Simulations with 500 nodes show that the proposed approach sustains the network for 3500 rounds, caps energy utilization at 0.5 Joules, transmits data at 0.52 Mbps, and achieves a packet delivery ratio (PDR) of 96%.
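The abstract does not give the CH objective function; a common shape for such a score — and the one sketched below — weighs residual energy, distance to the sink, and node degree, with all weights and parameters chosen purely for illustration.

```python
import math
import random

random.seed(1)
SINK = (50.0, 50.0)

def ch_score(node):
    """Composite CH fitness (illustrative weights): prefer nodes with high
    residual energy, close to the sink, and with many nearby neighbours."""
    d_sink = math.dist(node["pos"], SINK)
    return 0.6 * node["energy"] - 0.3 * d_sink / 100.0 + 0.1 * node["degree"] / 10.0

nodes = [{"id": i,
          "pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.2, 1.0),        # residual energy (J)
          "degree": random.randint(1, 10)}           # one-hop neighbours
         for i in range(50)]

# Pick 5 cluster heads per round by descending score
chs = sorted(nodes, key=ch_score, reverse=True)[:5]
print([n["id"] for n in chs])
```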


Subjects
Computer Communication Networks, Internet of Things, Algorithms, Animals, Conservation of Energy Resources, Wireless Technology
15.
Sci Rep ; 12(1): 14523, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008545

ABSTRACT

With the electric power grid rapidly shifting to the smart grid paradigm in a deregulated energy market, Internet of Things (IoT) based solutions are gaining prominence and innovative peer-to-peer (P2P) energy trading at the micro level is being deployed. Such advances, however, leave traditional security models vulnerable and pave the way for Blockchain, a Distributed Ledger Technology (DLT) whose decentralized, open, and transparent characteristics make it a viable alternative. Because of deregulation, energy trading markets must support massive volumes of micro-transactions, which becomes a performance bottleneck for existing Blockchain solutions such as Hyperledger and Ethereum. In this paper, a lightweight Tangle-based framework, IOTA (a third-generation DLT), is employed to design an energy trading market; its Directed Acyclic Graph (DAG) based design not only alleviates the reward overhead of micro-transactions but also provides scalability, quantum resistance, and high transaction throughput at low confirmation latency. Furthermore, the Masked Authentication Messaging (MAM) protocol is used over the IOTA P2P energy trading framework, allowing energy producers and consumers to share data while maintaining confidentiality and facilitating data accessibility. A Raspberry Pi 3 board with an INA219 voltage sensor is used to set up a light node and to publish and fetch data from the Tangle. The benchmarking results indicate low confirmation latency and high throughput compared with Hyperledger Fabric and Ethereum. Moreover, the transaction rate decreases when the IOTA bundle size grows beyond 10; for bundle sizes of 5 and 10 it outperforms the other platforms. The speedy confirmation time of transactions in IOTA is most suitable for peer-to-peer energy trading scenarios. This study serves as a guideline for deploying end-to-end transactions with IOTA Distributed Ledger Technology (DLT) and for improving Blockchain performance in the energy sector under various operating conditions.


Subjects
Blockchain, Internet of Things, Computer Security, Confidentiality, Publishing
16.
Comput Intell Neurosci ; 2022: 7588303, 2022.
Article in English | MEDLINE | ID: mdl-35785077

ABSTRACT

Developing reliable equity market models allows investors to make more informed decisions. A trading model can reduce the risks associated with investment and allow traders to choose the best-paying stocks. However, stock market analysis is complicated for batch processing techniques since stock prices are highly correlated. In recent years, advances in machine learning have created many opportunities to combine forecasting theory with risk optimization. This study postulates a unique two-stage framework. First, the mean-variance approach is utilized to select probable stocks (portfolio construction), thereby minimizing investment risk. Second, we present an online machine learning technique, a combination of the perceptron and the passive-aggressive algorithm, to predict future stock price movements for the upcoming period. For performance evaluation, we calculate classification reports, AUC scores, accuracy, and Hamming loss on real-world datasets covering 20 health sector indices across four different geographical regions. Lastly, we numerically compare our method's outcomes with those generated by the conventional solutions of previous studies. The results reveal that learning-based ensemble strategies combined with portfolio selection are effective by comparison.
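scikit-learn ships a PassiveAggressiveClassifier whose partial_fit supports exactly this kind of walk-forward online loop; the sketch below uses synthetic features in place of the study's index data, so the numbers it prints are illustrative only.

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier

rng = np.random.default_rng(7)
# Toy daily feature vectors (e.g. lagged returns) and next-day up/down labels
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

clf = PassiveAggressiveClassifier()
correct = 0
for t in range(100, 500):                        # walk-forward online loop
    if t > 100:
        correct += int(clf.predict(X[t:t + 1])[0] == y[t])
    clf.partial_fit(X[t:t + 1], y[t:t + 1], classes=[0, 1])
print(f"online accuracy: {correct / 399:.2%}")
```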


Subjects
Intelligence, Machine Learning, Algorithms, Neural Networks (Computer), Probability
17.
Comput Intell Neurosci ; 2022: 7086632, 2022.
Article in English | MEDLINE | ID: mdl-35800676

ABSTRACT

To analyze disease severity, it is vital to develop an appropriate prediction model and link it carefully to measurable events such as clinical parameters and patient outcomes. Timely identification of retinal diseases is increasingly vital to preventing blindness among the young and adults alike. Investigating the blood vessels delivers preliminary information on the existence and treatment of glaucoma, retinopathy, and related conditions. In the analysis of diabetic retinopathy, one of the essential steps is to extract the retinal blood vessels accurately. This study presents an improved Gabor filter built on various enhancement approaches; enhancing certain features of degraded images can simplify interpretation both for a human observer and for machine recognition. The enhancement approaches considered are gamma correction adapted with distributed weight (GCADW), joint equalization of histograms (JEH), the homomorphic filter, the unsharp masking filter, the adaptive unsharp masking filter, and a particle swarm optimization (PSO) based unsharp masking filter. In this paper, an effort has been made to improve the performance of the Gabor filter by combining it with these enhancement methods so as to improve blood vessel detection. The performance of all the suggested approaches is assessed on the publicly available DRIVE and CHASE_DB1 databases, and the results of all the integrated enhancement techniques are analyzed, discussed, and compared. The best result is delivered by the PSO-based unsharp masking filter combined with the Gabor filter, with an accuracy of 0.9593 on the DRIVE database and 0.9685 on the CHASE_DB1 database. The results illustrate the robustness of the recommended model in automatic blood vessel segmentation, making it a possible clinical decision support tool in diabetic retinopathy diagnosis.
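A bare-bones OpenCV sketch of the unsharp-masking-plus-Gabor-bank combination described above; the kernel sizes, the sharpening gain (which PSO would tune in the paper's setup), and the final Otsu step are illustrative choices rather than the paper's tuned values.

```python
import cv2
import numpy as np

img = cv2.imread("retina.png", cv2.IMREAD_GRAYSCALE)   # e.g. a DRIVE image

# Unsharp masking: add back a scaled high-frequency residual
blur = cv2.GaussianBlur(img, (9, 9), 2.0)
amount = 1.5                                   # PSO would tune this gain
sharp = cv2.addWeighted(img, 1 + amount, blur, -amount, 0)

# Bank of oriented Gabor kernels; vessels respond strongly at the
# orientation matching their local direction
enhanced = np.zeros_like(sharp)
for theta in np.arange(0, np.pi, np.pi / 8):
    kern = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                              lambd=8.0, gamma=0.5, psi=0)
    resp = cv2.filter2D(sharp, cv2.CV_8U, kern)
    enhanced = np.maximum(enhanced, resp)      # keep best orientation response

_, vessels = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("vessels.png", vessels)
```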


Subjects
Diabetic Retinopathy, Algorithms, Factual Databases, Humans, Image Enhancement/methods, Computer-Assisted Image Processing/methods, Retinal Vessels
18.
Sensors (Basel) ; 22(13)2022 Jul 02.
Article in English | MEDLINE | ID: mdl-35808508

ABSTRACT

Cloud providers create vendor lock-in by offering proprietary, non-standard APIs, resulting in a lack of interoperability and portability among clouds. To overcome this deterrent, solutions must be developed to exploit multiple clouds efficaciously. This paper proposes a middleware platform to mitigate the application portability issue among clouds, and a literature review is conducted to analyze existing solutions for application portability. The middleware allows an application to be ported across various platform-as-a-service (PaaS) clouds and supports deploying different services of an application on disparate clouds. The efficiency of the abstraction layer is validated by experimenting with an application that uses the message queue, Binary Large Object (BLOB), email, and short message service (SMS) services of various clouds via the proposed middleware, against the same application using those services via native code. The experimental results show that the middleware mildly affects latency but dramatically reduces the developer's overhead of implementing each service separately for different clouds to make an application portable.
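The adapter pattern behind such an abstraction layer can be sketched in a few lines; the interface, provider classes, and factory below illustrate the approach and are not the paper's actual middleware API.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral interface the application codes against."""
    @abstractmethod
    def put(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, name: str) -> bytes: ...

class AwsBlobStore(BlobStore):
    def put(self, name, data):   # would call the S3 SDK here
        print(f"S3 upload {name} ({len(data)} bytes)")
    def get(self, name):
        return b""               # stub for illustration

class AzureBlobStore(BlobStore):
    def put(self, name, data):   # would call the Azure SDK here
        print(f"Azure upload {name} ({len(data)} bytes)")
    def get(self, name):
        return b""               # stub for illustration

def make_store(provider: str) -> BlobStore:
    """The middleware resolves a provider at deploy time, not in app code."""
    return {"aws": AwsBlobStore, "azure": AzureBlobStore}[provider]()

store = make_store("azure")      # swap clouds by changing one config value
store.put("report.pdf", b"%PDF-1.7")
```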


Subjects
Software
19.
Comput Intell Neurosci ; 2022: 1883698, 2022.
Article in English | MEDLINE | ID: mdl-35720939

ABSTRACT

With the rapid advancement of information technology, online information has been growing exponentially, especially in the form of text documents such as news events, company reports, product reviews, stock-related reports, medical reports, and tweets. Consequently, online monitoring and text mining have become prominent tasks. During the past decade, significant efforts have been made to mine text documents using supervised, semi-supervised, and unsupervised machine and deep learning models. Our discussion covers state-of-the-art learning models for text mining and for solving various challenging NLP (natural language processing) problems through text classification. This paper summarizes several machine learning and deep learning algorithms used in text classification, along with their advantages and shortcomings. It also helps readers understand the various subtasks of the text classification process, together with older and recent literature. We believe readers will be able to find scope for further improvements in text classification or to propose new text classification techniques applicable to any domain of interest.


Subjects
Data Mining, Natural Language Processing, Algorithms, Data Mining/methods, Machine Learning, Publications
20.
Comput Intell Neurosci ; 2022: 3854635, 2022.
Article in English | MEDLINE | ID: mdl-35528334

ABSTRACT

Recent discoveries in imaging science and technology have drawn attention to hyperspectral imagery and remote sensing. Current intelligent techniques, such as support vector machines, sparse representations, active learning, extreme learning machines, transfer learning, and deep learning, are typically based on machine learning, and they enrich the processing of such three-dimensional, multi-band, high-resolution images with precision and fidelity. This article presents an extensive survey of the contributions of machine-dependent technologies and deep learning to land-cover classification based on hyperspectral images. The objective of this study is three-fold. First, after reviewing a large pool of Web of Science (WoS), Scopus, SCI, and SCIE-indexed and SCIE-related articles, we provide a systematic approach to review work that helps reveal research gaps and develop embedded research questions. Second, we emphasize contemporary advances in machine learning (ML) methods for identifying hyperspectral images, with a brief, organized overview and a thorough assessment of the literature involved. Finally, we draw conclusions to assist researchers in expanding their understanding of the relationship between machine learning and hyperspectral images for future research.


Subjects
Machine Learning, Support Vector Machines